Gap Preserving Reductions Between Reconfiguration Problems
Combinatorial reconfiguration is a growing research field studying problems on the transformability between a pair of solutions of a search problem. For example, in SAT Reconfiguration, given a Boolean formula φ and two satisfying truth assignments σ_s and σ_t for φ, we are asked to determine whether there is a sequence of satisfying truth assignments for φ starting from σ_s and ending with σ_t, each resulting from the previous one by flipping a single variable assignment. We consider the approximability of optimization variants of reconfiguration problems; e.g., Maxmin SAT Reconfiguration requires maximizing the minimum fraction of satisfied clauses of φ during the transformation from σ_s to σ_t. By solving such optimization variants approximately, we may be able to obtain a reasonable reconfiguration sequence comprising almost-satisfying truth assignments.
In this study, we prove a series of gap-preserving reductions to give evidence that a host of reconfiguration problems are PSPACE-hard to approximate under a plausible assumption. Our starting point is a new working hypothesis called the Reconfiguration Inapproximability Hypothesis (RIH), which asserts that a gap version of Maxmin CSP Reconfiguration is PSPACE-hard. This hypothesis may be thought of as a reconfiguration analogue of the PCP theorem. Our main result is PSPACE-hardness of approximating Maxmin 3-SAT Reconfiguration of bounded occurrence under RIH. The crux of its proof is a gap-preserving reduction from Maxmin Binary CSP Reconfiguration to itself of bounded degree. Because a simple application of the degree reduction technique using expander graphs due to Papadimitriou and Yannakakis (J. Comput. Syst. Sci., 1991) does not preserve perfect completeness, we modify the alphabet as if each vertex could take a pair of values simultaneously. To accomplish the soundness requirement, we further apply an explicit family of near-Ramanujan graphs and the expander mixing lemma. As an application of the main result, we demonstrate that under RIH, optimization variants of popular reconfiguration problems are PSPACE-hard to approximate, including Nondeterministic Constraint Logic due to Hearn and Demaine (Theor. Comput. Sci., 2005), Independent Set Reconfiguration, Clique Reconfiguration, and Vertex Cover Reconfiguration.
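The maxmin objective described above can be made concrete with a small sketch (the helper names are ours, not from the paper): given a CNF formula and a sequence of assignments that flips one variable per step, the objective value is the minimum fraction of satisfied clauses over all assignments visited.

```python
def frac_satisfied(clauses, assignment):
    """Fraction of clauses satisfied by `assignment`. A clause is a list
    of literals, where +i means variable i is True and -i means False."""
    sat = sum(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )
    return sat / len(clauses)

def maxmin_value(clauses, sequence):
    """Objective of Maxmin SAT Reconfiguration for a given flip sequence:
    the minimum satisfied-clause fraction over all visited assignments."""
    # Each consecutive pair must differ in exactly one variable.
    for a, b in zip(sequence, sequence[1:]):
        assert sum(a[v] != b[v] for v in a) == 1, "one flip per step"
    return min(frac_satisfied(clauses, a) for a in sequence)
```

For instance, for the clauses (x1 ∨ x2), (¬x1 ∨ x2), (x1 ∨ ¬x2), a sequence that flips x2 from True to False passes through an assignment satisfying only 2/3 of the clauses, so the sequence's value is 2/3.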
On Approximate Reconfigurability of Label Cover
Given a two-prover game and two of its satisfying labelings, the Label Cover Reconfiguration problem asks whether one labeling can be transformed into the other by repeatedly changing the label of a single vertex, while every intermediate labeling remains satisfying. We consider an optimization variant of Label Cover Reconfiguration obtained by relaxing the feasibility of labelings, referred to as Maxmin Label Cover Reconfiguration: we are allowed to pass through non-satisfying labelings, but are required to maximize the minimum fraction of satisfied edges during the transformation. Since the parallel repetition theorem of Raz (SIAM J. Comput., 1998), which implies NP-hardness of approximating Label Cover within any constant factor, yields strong inapproximability results for many NP-hard problems, one may think of using Maxmin Label Cover Reconfiguration to derive inapproximability results for reconfiguration problems. We prove the following results on Maxmin Label Cover Reconfiguration, which display trends different from those of Label Cover and the parallel repetition theorem:
(1) Maxmin Label Cover Reconfiguration can be approximated within some constant factor for restricted graph classes, including slightly dense graphs and balanced bipartite graphs.
(2) A naive parallel repetition of Maxmin Label Cover Reconfiguration does not decrease the optimal objective value.
(3) Label Cover Reconfiguration on projection games can be decided in polynomial time.
These results suggest that a reconfiguration analogue of the parallel repetition theorem is unlikely.
Comment: 11 pages.
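To illustrate the objective being maximized, here is a minimal sketch (our own toy encoding, not the paper's formalism): a game is a list of edges with, for each edge, the set of accepted label pairs, and a transformation's value is the minimum satisfied-edge fraction over the labelings it visits.

```python
def frac_edges_satisfied(edges, constraints, labeling):
    """Fraction of edges whose constraint accepts the endpoint labels.
    `constraints[(u, v)]` is the set of accepted (label_u, label_v) pairs."""
    ok = sum((labeling[u], labeling[v]) in constraints[(u, v)] for u, v in edges)
    return ok / len(edges)

def maxmin_objective(edges, constraints, sequence):
    """Objective of Maxmin Label Cover Reconfiguration for a transformation
    that changes one vertex label per step: the minimum satisfied-edge
    fraction over all visited labelings."""
    for a, b in zip(sequence, sequence[1:]):
        assert sum(a[v] != b[v] for v in a) == 1, "one relabeling per step"
    return min(frac_edges_satisfied(edges, constraints, lab) for lab in sequence)
```

On a two-edge path with equality constraints, relabeling the vertices one by one from all-0 to all-1 necessarily breaks one edge at a time, giving objective value 1/2.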
Gap Amplification for Reconfiguration Problems
In this paper, we demonstrate gap amplification for reconfiguration problems.
In particular, we prove an explicit factor of PSPACE-hardness of approximation
for three popular reconfiguration problems only assuming the Reconfiguration
Inapproximability Hypothesis (RIH) due to Ohsaka (STACS 2023). Our main result
is that under RIH, Maxmin Binary CSP Reconfiguration is PSPACE-hard to
approximate within a factor of . Moreover, the same result holds even
if the constraint graph is restricted to -expander for arbitrarily
small . The crux of its proof is an alteration of the gap
amplification technique due to Dinur (J. ACM, 2007), which amplifies the
vs. gap for arbitrarily small up to the vs.
gap. As an application of the main result, we demonstrate that
Minmax Set Cover Reconfiguration and Minmax Dominating Set Reconfiguratio} are
PSPACE-hard to approximate within a factor of under RIH. Our proof is
based on a gap-preserving reduction from Label Cover to Set Cover due to Lund
and Yannakakis (J. ACM, 1994). However, unlike Lund--Yannakakis' reduction, the
expander mixing lemma is essential to use. We highlight that all results hold
unconditionally as long as "PSPACE-hard" is replaced by "NP-hard," and are the
first explicit inapproximability results for reconfiguration problems without
resorting to the parallel repetition theorem. We finally complement the main
result by showing that it is NP-hard to approximate Maxmin Binary CSP
Reconfiguration within a factor better than .Comment: 41 pages, to appear in Proc. 35th Annu. ACM-SIAM Symp. Discrete
Algorithms (SODA), 202
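The expander mixing lemma that the proof relies on states that for a d-regular graph with second largest absolute eigenvalue λ, the number of edges between any vertex sets S and T is close to its expectation d|S||T|/n, up to λ·sqrt(|S||T|). A small numerical sketch (assuming the graph is given as a dense adjacency matrix):

```python
import numpy as np

def mixing_discrepancy(A, S, T):
    """Return (lhs, rhs) of the expander mixing lemma on a d-regular graph:
    |e(S, T) - d|S||T|/n| <= lambda * sqrt(|S||T|),
    where e(S, T) counts ordered pairs (u, v) in S x T with an edge,
    and lambda is the second largest absolute eigenvalue of A."""
    n = A.shape[0]
    d = int(A[0].sum())
    abs_eigs = np.sort(np.abs(np.linalg.eigvalsh(A)))
    lam = abs_eigs[-2]  # largest is d itself; lambda is the next one
    e_st = sum(A[u, v] for u in S for v in T)
    lhs = abs(e_st - d * len(S) * len(T) / n)
    rhs = lam * np.sqrt(len(S) * len(T))
    return lhs, rhs
```

On the complete graph K_6 (d = 5, λ = 1) with S = {0, 1, 2} and T = {3, 4, 5}, the discrepancy is |9 - 7.5| = 1.5, within the bound λ·sqrt(9) = 3.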
A Critical Reexamination of Intra-List Distance and Dispersion
Diversification of recommendation results is a promising approach for coping
with the uncertainty associated with users' information needs. Of particular
importance in diversified recommendation is to define and optimize an
appropriate diversity objective. In this study, we revisit the most popular
diversity objective called intra-list distance (ILD), defined as the average
pairwise distance between selected items, and a similar but lesser known
objective called dispersion, which is the minimum pairwise distance. Owing to
their simplicity and flexibility, ILD and dispersion have been used in a
plethora of diversified recommendation studies. Nevertheless, little is actually known about what kind of items these objectives prefer.
We present a critical reexamination of ILD and dispersion from theoretical
and experimental perspectives. Our theoretical results reveal that these
objectives have potential drawbacks: ILD may select duplicate items that are
very close to each other, whereas dispersion may overlook distant item pairs.
As a competitor to ILD and dispersion, we design a diversity objective called
Gaussian ILD, which can interpolate between ILD and dispersion by tuning the
bandwidth parameter. We verify our theoretical results by experimental results
using real-world data and confirm the extreme behavior of ILD and dispersion in
practice.
Comment: 10 pages, to appear in the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2023).
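Both objectives are simple enough to state directly in code. The sketch below (our own illustration, assuming items and a pairwise distance function) also reproduces the drawback identified above: on a line segment, ILD prefers duplicated extreme points over evenly spread ones, while dispersion collapses to zero on any duplicate.

```python
from itertools import combinations

def ild(items, dist):
    """Intra-list distance: average pairwise distance of selected items."""
    pairs = list(combinations(items, 2))
    return sum(dist(a, b) for a, b in pairs) / len(pairs)

def dispersion(items, dist):
    """Dispersion: minimum pairwise distance of selected items."""
    return min(dist(a, b) for a, b in combinations(items, 2))
```

With dist(a, b) = |a - b| on [0, 1], the duplicated extremes [0, 0, 1, 1] have ILD 2/3, beating the evenly spread [0, 1/3, 2/3, 1] with ILD 5/9, even though the former contains exact duplicates (dispersion 0).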
Curse of "Low" Dimensionality in Recommender Systems
Beyond accuracy, there are a variety of aspects to the quality of recommender
systems, such as diversity, fairness, and robustness. We argue that many of the
prevalent problems in recommender systems are partly due to low-dimensionality
of user and item embeddings, particularly when dot-product models, such as
matrix factorization, are used.
In this study, we showcase empirical evidence suggesting the necessity of
sufficient dimensionality for user/item embeddings to achieve diverse, fair,
and robust recommendation. We then present theoretical analyses of the
expressive power of dot-product models. Our theoretical results demonstrate
that the number of possible rankings expressible under dot-product models is
exponentially bounded by the dimension of item factors. We also found empirically that low dimensionality contributes to popularity bias, widening the gap between the rank positions of popular and long-tail items, and we give a theoretical justification for this phenomenon.
Comment: Accepted by SIGIR '23.
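The bound on expressible rankings can be probed empirically with a small experiment (our own sketch, not the paper's analysis): fix item embeddings, sample many user vectors, and count how many distinct rankings the dot-product scores induce. With d = 1 the count cannot exceed 2, far below the n! orderings of n items.

```python
import random

def count_expressible_rankings(item_vecs, trials=1000, seed=0):
    """Empirically count distinct item rankings realizable as descending
    dot products <u, v_i> over random Gaussian user vectors u."""
    rng = random.Random(seed)
    d = len(item_vecs[0])
    seen = set()
    for _ in range(trials):
        u = [rng.gauss(0, 1) for _ in range(d)]
        scores = [sum(ui * vi for ui, vi in zip(u, v)) for v in item_vecs]
        ranking = tuple(sorted(range(len(item_vecs)), key=lambda i: -scores[i]))
        seen.add(ranking)
    return len(seen)
```

For five items embedded in one dimension, only the ascending and descending orders ever appear, illustrating how drastically low dimensionality restricts the model's expressive power.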
On the Power of Tree-Depth for Fully Polynomial FPT Algorithms
There are many classical problems in P whose time complexities have not been improved over the past decades.
Recent studies of "Hardness in P" have revealed that, for several of such problems, the current fastest algorithm is the best possible under some complexity assumptions.
To bypass this difficulty, the concept of "FPT inside P" has been introduced.
For a problem with the current best time complexity O(n^c), the goal is to design an algorithm running in k^{O(1)} n^{c'} time for a parameter k and a constant c' < c.
In this paper, we investigate the complexity of graph problems in P parameterized by tree-depth, a graph parameter related to tree-width.
We show that a simple divide-and-conquer method can solve many graph problems, including
Weighted Matching, Negative Cycle Detection, Minimum Weight Cycle, Replacement Paths, and 2-hop Cover,
in O(td m) time or O(td (m+nlog n)) time, where td is the tree-depth of the input graph.
Because any graph of tree-width tw has tree-depth at most (tw+1)log_2 n, our algorithms also run in O(tw mlog n) time or O(tw (m+nlog n)log n) time.
These results match or improve the previous best algorithms parameterized by tree-width.
In particular, we resolve an open problem posed by Fomin et al. (SODA 2017) on a fully polynomial FPT algorithm for Weighted Matching parameterized by tree-width.
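For intuition about the parameter itself, here is the textbook recursive definition of tree-depth (not the paper's divide-and-conquer algorithm, which runs in polynomial time): the tree-depth of a connected graph is 1 plus the minimum, over vertices v, of the tree-depth of G - v; for a disconnected graph it is the maximum over components. This naive recursion is exponential and only suitable for tiny graphs.

```python
def components(vertices, edges):
    """Connected components of the subgraph induced by `vertices`."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        if u in adj and v in adj:
            adj[u].add(v)
            adj[v].add(u)
    seen, comps = set(), []
    for s in vertices:
        if s in seen:
            continue
        stack, comp = [s], set()
        while stack:
            x = stack.pop()
            if x in comp:
                continue
            comp.add(x)
            seen.add(x)
            stack.extend(adj[x] - comp)
        comps.append(frozenset(comp))
    return comps

def tree_depth(vertices, edges):
    """Naive exponential-time tree-depth, following the recursive definition."""
    vertices = frozenset(vertices)
    if not vertices:
        return 0
    comps = components(vertices, edges)
    if len(comps) > 1:
        return max(tree_depth(c, edges) for c in comps)
    return 1 + min(tree_depth(vertices - {v}, edges) for v in vertices)
```

For example, the path on four vertices has tree-depth 3 (remove a middle vertex, then recurse), while the star K_{1,3} has tree-depth 2 (remove the center, leaving isolated vertices).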
Safe Collaborative Filtering
Excellent tail performance is crucial for modern machine learning tasks, such
as algorithmic fairness, class imbalance, and risk-sensitive decision making,
as it ensures the effective handling of challenging samples within a dataset.
Tail performance is also a vital determinant of success for personalised
recommender systems to reduce the risk of losing users with low satisfaction.
This study introduces a "safe" collaborative filtering method that prioritises
recommendation quality for less-satisfied users rather than focusing on the
average performance. Our approach minimises the conditional value at risk
(CVaR), which represents the average risk over the tails of users' loss. To
overcome computational challenges for web-scale recommender systems, we develop
a robust yet practical algorithm that extends the most scalable method,
implicit alternating least squares (iALS). Empirical evaluation on real-world
datasets demonstrates the excellent tail performance of our approach while
maintaining competitive computational efficiency.
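The CVaR objective minimized here has a simple empirical form; the sketch below (our own illustration, not the paper's iALS-based algorithm) computes the conditional value at risk at level α as the average of the worst α-fraction of per-user losses.

```python
import math

def cvar(losses, alpha):
    """Conditional value at risk at level alpha: the mean of the worst
    ceil(alpha * n) per-user losses, i.e. the tail the method optimizes."""
    k = max(1, math.ceil(alpha * len(losses)))
    worst = sorted(losses, reverse=True)[:k]
    return sum(worst) / len(worst)
```

For losses [1, 2, 3, 4], CVaR at α = 0.5 averages the two worst losses, giving 3.5, whereas α = 1 recovers the ordinary mean 2.5; minimizing CVaR for small α therefore focuses the model on its least-satisfied users.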
An Efficient and Effective Algorithm for Identifying High-Influence Vertex Sets on Social Networks
Degree type: Doctoral degree by coursework. Dissertation committee: (Chair) Associate Professor Tetsuo Shibuya, Professor Naoki Kobayashi, Associate Professor Mary Inaba, Associate Professor Rui Yamaguchi, and Lecturer Junya Honda. The University of Tokyo.